
    VLSI Architecture for Configurable and Low-Complexity Design of Hard-Decision Viterbi Decoding Algorithm

    Convolutional encoding and decoding are fundamental processes in convolutional error correction. One of the most popular decoding algorithms is the Viterbi algorithm, which is extensively implemented in many digital communication applications. Its VLSI design challenges concern area, speed, power, complexity, and configurability. In this research, we propose a VLSI architecture for a configurable and low-complexity design of the hard-decision Viterbi decoding algorithm. The configurable and low-complexity design is achieved by designing a generic VLSI architecture, optimizing each processing element (PE) at the logical operation level, and designing a conditional adapter. The proposed design can be configured for any predefined number of trace-backs simply by changing the trace-back parameter value. Its computational process requires a latency of only N + 2 clock cycles, where N is the number of trace-backs. Its configurability has been verified for N = 8, N = 16, N = 32, and N = 64. Furthermore, the proposed design was synthesized and evaluated on Xilinx and Altera FPGA target boards for area consumption and speed performance.
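
    To make the trace-back idea concrete, the following is a minimal software sketch of hard-decision Viterbi decoding with a configurable trace-back depth. The generator polynomials (7, 5) octal and constraint length K = 3 are illustrative assumptions; the paper's PE-level optimizations and conditional adapter are not modeled here.

    # Hard-decision Viterbi decoding with a configurable trace-back depth.
    # The (7, 5) octal, K = 3 code is an assumption for illustration only.
    G = [0b111, 0b101]            # rate-1/2 generator polynomials
    K = 3                         # constraint length
    N_STATES = 1 << (K - 1)
    INF = float("inf")

    def parity(x):
        return bin(x).count("1") & 1

    def encode(bits):
        state, out = 0, []
        for b in bits:
            reg = (b << (K - 1)) | state
            out += [parity(reg & g) for g in G]
            state = reg >> 1
        return out

    def viterbi_decode(rx, n_traceback=8):
        metrics = [0] + [INF] * (N_STATES - 1)   # accumulated Hamming metrics
        history, decoded = [], []
        for t in range(0, len(rx), 2):
            new, surv = [INF] * N_STATES, [(0, 0)] * N_STATES
            for s in range(N_STATES):
                if metrics[s] == INF:
                    continue
                for b in (0, 1):
                    reg = (b << (K - 1)) | s
                    ns = reg >> 1
                    m = metrics[s] + sum(parity(reg & g) != r
                                         for g, r in zip(G, rx[t:t + 2]))
                    if m < new[ns]:
                        new[ns], surv[ns] = m, (s, b)
            metrics = new
            history.append(surv)
            if len(history) >= n_traceback:      # emit the oldest decided bit
                s = min(range(N_STATES), key=lambda i: metrics[i])
                for step in reversed(history):
                    s, b = step[s]
                decoded.append(b)
                history.pop(0)
        s = min(range(N_STATES), key=lambda i: metrics[i])  # flush the tail
        tail = []
        for step in reversed(history):
            s, b = step[s]
            tail.append(b)
        return decoded + tail[::-1]

    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    assert viterbi_decode(encode(msg), n_traceback=4) == msg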

    SpikeDyn: A Framework for Energy-Efficient Spiking Neural Networks with Continual and Unsupervised Learning Capabilities in Dynamic Environments

    Spiking Neural Networks (SNNs) have the potential for efficient unsupervised and continual learning because of their biological plausibility, but their complexity still poses a serious research challenge for energy-efficient designs in resource-constrained scenarios (e.g., embedded systems and IoT-Edge devices). We propose SpikeDyn, a comprehensive framework for energy-efficient SNNs with continual and unsupervised learning capabilities in dynamic environments, covering both the training and inference phases. This is achieved through the following mechanisms: 1) reduction of neuronal operations by replacing the inhibitory neurons with direct lateral inhibition; 2) a memory- and energy-constrained SNN model search algorithm that employs analytical models to estimate the memory footprint and energy consumption of candidate SNN models and selects a Pareto-optimal one; and 3) a lightweight continual and unsupervised learning algorithm that employs adaptive learning rates, an adaptive membrane threshold potential, weight decay, and a reduction of spurious updates. Our experimental results show that, for a network with 400 excitatory neurons, SpikeDyn reduces energy consumption on average by 51% for training and 37% for inference, compared to the state-of-the-art. Due to the improved learning algorithm, SpikeDyn provides on average a 21% accuracy improvement over the state-of-the-art for classifying the most recently learned task, and an 8% average improvement for previously learned tasks.
    Comment: To appear at the 58th IEEE/ACM Design Automation Conference (DAC), December 2021, San Francisco, CA, USA.
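
    As a rough illustration of mechanism 2, the sketch below estimates memory and energy for candidate SNN models with simple analytical models and keeps the non-dominated ones. The cost constants and the candidate list are invented placeholders, not values from the paper.

    # Analytical memory/energy estimates plus Pareto filtering for candidate
    # SNN models. All constants are illustrative placeholders.
    def memory_bytes(n_exc, n_in=784, bits_per_weight=32):
        return n_in * n_exc * bits_per_weight // 8        # weights dominate

    def energy_units(n_exc, spikes_per_sample=100, e_syn_op=1.0):
        return n_exc * spikes_per_sample * e_syn_op       # arbitrary units

    def pareto_front(models):
        # model: dict with 'mem', 'energy', 'error'; keep non-dominated ones
        def dominates(a, b):
            return (a["mem"] <= b["mem"] and a["energy"] <= b["energy"]
                    and a["error"] <= b["error"] and a != b)
        return [m for m in models if not any(dominates(o, m) for o in models)]

    candidates = []
    for n_exc, err in [(100, 0.18), (200, 0.14), (400, 0.11), (800, 0.10)]:
        candidates.append({"n_exc": n_exc, "mem": memory_bytes(n_exc),
                           "energy": energy_units(n_exc), "error": err})
    print(pareto_front(candidates))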

    Q-SpiNN: A Framework for Quantizing Spiking Neural Networks

    A prominent technique for reducing the memory footprint of Spiking Neural Networks (SNNs) without significantly decreasing accuracy is quantization. However, state-of-the-art techniques focus only on weight quantization under a single quantization scheme, i.e., either post-training quantization (PTQ) or in-training quantization (ITQ), and do not consider (1) quantizing other SNN parameters (e.g., the neuron membrane potential), (2) exploring different combinations of quantization approaches (i.e., quantization schemes, precision levels, and rounding schemes), and (3) selecting the SNN model with a good memory-accuracy trade-off at the end. Therefore, the memory saving these techniques offer while meeting the targeted accuracy is limited, hindering the processing of SNNs on resource-constrained systems (e.g., IoT-Edge devices). Towards this, we propose Q-SpiNN, a novel quantization framework for memory-efficient SNNs. The key mechanisms of Q-SpiNN are: (1) employing quantization for different SNN parameters based on their significance to accuracy, (2) exploring different combinations of quantization schemes, precision levels, and rounding schemes to find efficient SNN model candidates, and (3) developing an algorithm that quantifies the benefit of the memory-accuracy trade-off obtained by the candidates and selects the Pareto-optimal one. The experimental results show that, for the unsupervised network, Q-SpiNN reduces the memory footprint by ca. 4x while maintaining accuracy within 1% of the baseline on the MNIST dataset. For the supervised network, Q-SpiNN reduces the memory by ca. 2x while keeping accuracy within 2% of the baseline on the DVS-Gesture dataset.
    Comment: Accepted for publication at the 2021 International Joint Conference on Neural Networks (IJCNN), July 2021, Virtual Event.
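
    A minimal sketch of the exploration idea: quantize a weight array under different precision levels and rounding schemes, then score each candidate's memory-accuracy trade-off. The rounding functions are standard; the scoring formula is a placeholder, not the paper's exact metric.

    import numpy as np

    def quantize(x, n_bits, rounding="nearest", x_max=1.0, rng=None):
        # Uniform symmetric quantizer; the rounding scheme is selectable.
        scale = (2 ** (n_bits - 1) - 1) / x_max
        y = x * scale
        if rounding == "nearest":
            q = np.round(y)
        elif rounding == "truncate":
            q = np.trunc(y)
        else:                                  # stochastic rounding
            rng = rng or np.random.default_rng()
            q = np.floor(y + rng.random(y.shape))
        lim = 2 ** (n_bits - 1) - 1
        return np.clip(q, -lim - 1, lim) / scale

    def tradeoff_score(acc, mem_bits, acc_ref, mem_ref, alpha=0.5):
        # Higher is better: weighted memory saving plus retained accuracy.
        return alpha * (1 - mem_bits / mem_ref) + (1 - alpha) * (acc / acc_ref)

    w = np.random.uniform(-1, 1, 1000)
    for bits in (16, 8, 4):
        for scheme in ("nearest", "truncate", "stochastic"):
            err = np.abs(quantize(w, bits, scheme) - w).mean()
            print(bits, scheme, round(float(err), 5))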

    Mantis: Enabling Energy-Efficient Autonomous Mobile Agents with Spiking Neural Networks

    Autonomous mobile agents such as unmanned aerial vehicles (UAVs) and mobile robots have shown huge potential for improving human productivity. These agents require low power/energy consumption to achieve a long operational lifespan, since they are usually battery-powered, and they need to adapt to changing/dynamic environments, especially when deployed in remote or dangerous locations, which requires efficient online learning capabilities. These requirements can be fulfilled by Spiking Neural Networks (SNNs), since SNNs offer low power/energy consumption due to sparse computations and efficient online learning due to bio-inspired learning mechanisms. However, a methodology is still required for employing appropriate SNN models on autonomous mobile agents. Towards this, we propose Mantis, a methodology to systematically employ SNNs on autonomous mobile agents to enable energy-efficient processing and adaptive capabilities in dynamic environments. The key ideas of Mantis include the optimization of SNN operations, the employment of a bio-plausible online learning mechanism, and SNN model selection. The experimental results demonstrate that our methodology maintains high accuracy with a significantly smaller memory footprint and energy consumption (i.e., a 3.32x memory reduction and 2.9x energy saving for an SNN model with 8-bit weights) compared to the baseline network with 32-bit weights. In this manner, Mantis enables the employment of SNNs for resource- and energy-constrained mobile agents.
    Comment: To appear at the 2023 International Conference on Automation, Robotics and Applications (ICARA), February 2023, Abu Dhabi, UAE. arXiv admin note: text overlap with arXiv:2206.0865
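
    For the bio-plausible online learning ingredient, a pair-based STDP rule is a representative example; this is a sketch under the assumption that an STDP-style rule is used, and the constants are illustrative, not the paper's.

    import math

    def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
        # Pair-based STDP: potentiate when the pre-synaptic spike precedes
        # the post-synaptic spike (dt = t_post - t_pre >= 0), else depress.
        if dt_ms >= 0:
            return a_plus * math.exp(-dt_ms / tau_ms)
        return -a_minus * math.exp(dt_ms / tau_ms)

    for dt in (-40, -10, 0, 10, 40):
        print(dt, round(stdp_dw(dt), 5))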

    TopSpark: A Timestep Optimization Methodology for Energy-Efficient Spiking Neural Networks on Autonomous Mobile Agents

    Autonomous mobile agents require low-power/energy-efficient machine learning (ML) algorithms to complete their ML-based tasks while adapting to diverse environments, as mobile agents are usually battery-powered. These requirements can be fulfilled by Spiking Neural Networks (SNNs), which offer low power/energy processing due to their sparse computations and efficient online learning through bio-inspired mechanisms for adapting to different environments. Recent works have shown that the energy consumption of SNNs can be optimized by reducing the computation time of each neuron for processing a sequence of spikes (the timestep). However, state-of-the-art techniques rely on intensive design searches to determine fixed timestep settings for inference only, which prevents SNNs from achieving further energy-efficiency gains in both training and inference and restricts them from performing efficient online learning at run time. Toward this, we propose TopSpark, a novel methodology that leverages adaptive timestep reduction to enable energy-efficient SNN processing in both training and inference, while keeping accuracy close to that of SNNs without timestep reduction. The ideas of TopSpark include: analyzing the impact of different timesteps on accuracy; identifying neuron parameters that significantly affect accuracy at different timesteps; employing parameter enhancements that let SNNs perform learning and inference effectively with less spiking activity; and developing a strategy to trade off accuracy, latency, and energy to meet the design requirements. The results show that TopSpark reduces SNN latency by 3.9x and energy consumption by 3.5x (training) and 3.3x (inference) on average, across different network sizes, learning rules, and workloads, while maintaining accuracy within 2% of SNNs without timestep reduction.
    Comment: To appear at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2023, Detroit, MI, USA.
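
    The trade-off strategy can be pictured as follows: given measured accuracy per timestep setting, choose the smallest timestep whose accuracy stays within a tolerance of the full-timestep accuracy (energy and latency scale roughly with the timestep). The numbers below are placeholders, not the paper's results.

    def pick_timestep(acc_by_T, tol=0.02):
        # acc_by_T maps a timestep setting to its measured accuracy.
        t_full = max(acc_by_T)                 # largest timestep = reference
        target = acc_by_T[t_full] - tol
        feasible = [t for t, acc in acc_by_T.items() if acc >= target]
        return min(feasible)                   # smallest acceptable timestep

    acc_by_T = {100: 0.92, 50: 0.915, 25: 0.905, 10: 0.85}
    t = pick_timestep(acc_by_T)
    print(t, "-> latency/energy reduction ~", round(100 / t, 1), "x")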

    FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks

    Spiking Neural Networks (SNNs) are gaining interest due to their event-driven processing, which enables low-power/energy computation on hardware platforms, while offering unsupervised learning capability through the spike-timing-dependent plasticity (STDP) rule. However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy, making them difficult to deploy on embedded systems such as battery-powered mobile devices and IoT-Edge nodes. Towards this, we propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference, with unsupervised learning capability, while maintaining accuracy. This is achieved by (1) reducing the computational requirements of neuronal and STDP operations, (2) improving the accuracy of STDP-based learning, (3) compressing the SNN through fixed-point quantization, and (4) incorporating the memory and energy requirements into the optimization process. FSpiNN reduces the computational requirements by reducing the number of neuronal operations, the STDP-based synaptic weight updates, and the STDP complexity. To improve the accuracy of learning, FSpiNN employs timestep-based synaptic weight updates and adaptively determines the STDP potentiation factor and the effective inhibition strength. The experimental results show that, compared to the state-of-the-art work, FSpiNN achieves 7.5x memory saving and improves energy efficiency by 3.5x on average for training and 1.8x on average for inference, across the MNIST and Fashion MNIST datasets, with no accuracy loss for a network with 4900 excitatory neurons, thereby enabling energy-efficient SNNs for edge devices and embedded systems.
    Comment: To appear in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (IEEE-TCAD), as part of the ESWEEK-TCAD Special Issue, September 2020.
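
    For the fixed-point compression step, a sketch of signed Qm.n conversion and the resulting memory saving; the 16-bit format is an assumption for illustration, not the paper's exact configuration.

    def to_fixed_point(w, total_bits=16, frac_bits=8):
        # Quantize a float weight to signed fixed point with `frac_bits`
        # fractional bits, saturating at the representable range.
        scale = 1 << frac_bits
        lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
        return max(lo, min(hi, round(w * scale))) / scale

    n_weights = 784 * 4900                     # input-to-excitatory synapses
    saving = (n_weights * 32) / (n_weights * 16)
    print(to_fixed_point(0.12345), f"memory saving: {saving:.1f}x")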

    Noise and Bandwidth Consideration in Designing Op-Amp Based Transimpedance Amplifier for VLC

    In a visible light communication (VLC) system, many modules are involved. One of the important modules is the transimpedance amplifier (TIA), which resides in the analog front-end receiver (Rx-AFE). The TIA converts the current signal provided by the photodiode (PD) into a voltage signal. For this reason, the TIA should operate with low noise and a wide frequency bandwidth, which enables flexible coverage of the VLC system in performing its signal processing. Hence, in this research, we provide noise and frequency-bandwidth considerations and analysis for designing a TIA that meets the required design specification of a VLC system.
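
    The bandwidth/noise interplay can be made concrete with the textbook op-amp TIA relations (a first-pass sketch, not the paper's specific derivation): the feedback capacitor for an approximately Butterworth response, the resulting -3 dB bandwidth, and the thermal noise current density of the feedback resistor.

    import math

    K_B, T = 1.380649e-23, 300.0          # Boltzmann constant [J/K], temp [K]

    def tia_first_pass(r_f, c_in, gbw):
        # r_f: feedback resistor [ohm]; c_in: total input capacitance [F]
        # (photodiode + op-amp input); gbw: op-amp gain-bandwidth product [Hz].
        c_f = math.sqrt(c_in / (2 * math.pi * r_f * gbw))    # feedback cap [F]
        f_3db = math.sqrt(gbw / (2 * math.pi * r_f * c_in))  # bandwidth [Hz]
        i_n = math.sqrt(4 * K_B * T / r_f)                   # A/sqrt(Hz)
        return c_f, f_3db, i_n

    c_f, f_3db, i_n = tia_first_pass(r_f=10e3, c_in=10e-12, gbw=100e6)
    print(f"C_F = {c_f*1e12:.2f} pF, f_-3dB = {f_3db/1e6:.1f} MHz, "
          f"i_n = {i_n*1e12:.2f} pA/sqrt(Hz)")

    Note the inherent trade-off: a larger feedback resistor raises the transimpedance gain and lowers the noise current density, but shrinks the achievable bandwidth for a given input capacitance.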

    SparkXD: A Framework for Resilient and Energy-Efficient Spiking Neural Network Inference using Approximate DRAM

    Spiking Neural Networks (SNNs) have the potential for achieving low energy consumption due to their biologically sparse computation. Several studies have shown that off-chip memory (DRAM) accesses are the most energy-consuming operations in SNN processing. However, state-of-the-art SNN systems do not optimize the DRAM energy-per-access, thereby hindering high energy efficiency. To substantially minimize the DRAM energy-per-access, a key knob is to reduce the DRAM supply voltage, but this may lead to DRAM errors (i.e., the so-called approximate DRAM). Towards this, we propose SparkXD, a novel framework that provides a comprehensive conjoint solution for resilient and energy-efficient SNN inference using low-power DRAMs subjected to voltage-induced errors. The key mechanisms of SparkXD are: (1) improving the SNN error tolerance through fault-aware training that considers bit errors from approximate DRAM, (2) analyzing the error tolerance of the improved SNN model to find the maximum tolerable bit error rate (BER) that meets the targeted accuracy constraint, and (3) energy-efficient DRAM data mapping for the resilient SNN model, which places the weights in appropriate DRAM locations to minimize the DRAM access energy. Through these mechanisms, SparkXD mitigates the negative impact of DRAM (approximation) errors and provides the required accuracy. The experimental results show that, for a target accuracy within 1% of the baseline design (i.e., an SNN without DRAM errors), SparkXD reduces the DRAM energy by ca. 40% on average across different network sizes.
    Comment: To appear at the 58th IEEE/ACM Design Automation Conference (DAC), December 2021, San Francisco, CA, USA.
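
    Mechanism (1) can be illustrated by injecting DRAM-like bit flips into a quantized weight tensor at a given BER during training; the 8-bit weight format is an assumption for the sketch.

    import numpy as np

    def inject_bit_errors(w_int8, ber, rng=None):
        # Flip each stored bit independently with probability `ber`,
        # mimicking errors from reduced-voltage (approximate) DRAM.
        rng = rng or np.random.default_rng()
        bits = np.unpackbits(w_int8.view(np.uint8))
        flips = (rng.random(bits.size) < ber).astype(np.uint8)
        return np.packbits(bits ^ flips).view(np.int8).reshape(w_int8.shape)

    w = np.random.randint(-128, 128, size=(4, 4), dtype=np.int8)
    w_faulty = inject_bit_errors(w, ber=1e-1)  # high BER makes flips visible
    print((w != w_faulty).mean(), "fraction of corrupted weights")

    In a fault-aware training loop, such an injection would be applied to the weights in each forward pass so the network learns to tolerate the error distribution.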

    EnforceSNN: Enabling Resilient and Energy-Efficient Spiking Neural Network Inference considering Approximate DRAMs for Embedded Systems

    Spiking Neural Networks (SNNs) have shown capabilities of achieving high accuracy under unsupervised settings and low operational power/energy due to their bio-plausible computations. Previous studies identified that DRAM-based off-chip memory accesses dominate the energy consumption of SNN processing. However, state-of-the-art works do not optimize the DRAM energy-per-access, thereby hindering SNN-based systems from achieving further energy-efficiency gains. To substantially reduce the DRAM energy-per-access, an effective solution is to decrease the DRAM supply voltage, but this may lead to errors in DRAM cells (i.e., so-called approximate DRAM). Towards this, we propose EnforceSNN, a novel design framework that provides a solution for resilient and energy-efficient SNN inference using reduced-voltage DRAM for embedded systems. The key mechanisms of EnforceSNN are: (1) employing quantized weights to reduce the DRAM access energy; (2) devising an efficient DRAM mapping policy to minimize the DRAM energy-per-access; (3) analyzing the SNN error tolerance to understand its accuracy profile under different bit error rate (BER) values; (4) leveraging this information to develop an efficient fault-aware training (FAT) scheme that considers different BER values and bit-error locations in DRAM to improve the SNN error tolerance; and (5) developing an algorithm to select the SNN model that offers good trade-offs among accuracy, memory, and energy consumption. The experimental results show that EnforceSNN maintains accuracy (i.e., no accuracy loss for a BER of at most 10^-3) compared to the baseline SNN with accurate DRAM, while achieving up to 84.9% DRAM energy saving and up to 4.1x speed-up of DRAM data throughput across different network sizes.
    Comment: Accepted for publication in Frontiers in Neuroscience, Section Neuromorphic Engineering.
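
    A toy version of mechanism (2): place consecutive weights of a layer in the same DRAM row so that streaming accesses are mostly row-buffer hits, which cost much less energy than row activations. The geometry and energy figures are invented for the sketch.

    ROW_BYTES = 2048                  # assumed DRAM row size
    E_HIT, E_MISS = 1.0, 5.0          # relative energy: row hit vs. activation

    def map_weights(n_weights, bytes_per_weight=1):
        # Sequential mapping: weight i -> (row, column) in one DRAM bank.
        per_row = ROW_BYTES // bytes_per_weight
        return [(i // per_row, i % per_row) for i in range(n_weights)]

    def streaming_access_energy(mapping):
        energy, open_row = 0.0, None
        for row, _ in mapping:        # weights read in order during inference
            energy += E_HIT if row == open_row else E_MISS
            open_row = row
        return energy

    m = map_weights(8192)
    print(streaming_access_energy(m))  # only 4 activations for 8192 accesses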

    RescueSNN: Enabling Reliable Executions on Spiking Neural Network Accelerators under Permanent Faults

    To maximize the performance and energy efficiency of Spiking Neural Network (SNN) processing on resource-constrained embedded systems, specialized hardware accelerators/chips are employed. However, these SNN chips may suffer from permanent faults that affect the functionality of the weight memory and neuron behavior, potentially causing significant accuracy degradation and system malfunction. Such permanent faults may come from manufacturing defects during fabrication and/or from device/transistor damage (e.g., due to wear-out) during run-time operation. The impact of permanent faults on SNN chips and the respective mitigation techniques have not yet been thoroughly investigated. Toward this, we propose RescueSNN, a novel methodology to mitigate permanent faults in the compute engine of SNN chips without requiring additional retraining, thereby significantly cutting design time and retraining costs while maintaining throughput and quality. The key ideas of RescueSNN are (1) analyzing the characteristics of SNNs under permanent faults; (2) leveraging this analysis to improve SNN fault tolerance through effective fault-aware mapping (FAM); and (3) devising lightweight hardware enhancements to support FAM. Our FAM technique leverages the fault map of the SNN compute engine to (i) minimize weight corruption when mapping weight bits onto faulty memory cells, and (ii) selectively employ faulty neurons that do not cause significant accuracy degradation, maintaining accuracy and throughput while considering the SNN operations and processing dataflow. The experimental results show that RescueSNN improves accuracy by up to 80% while keeping the throughput reduction below 25% under high fault rates (e.g., faults at 0.5 of the potential fault locations), compared to running SNNs on the faulty chip without mitigation.
    Comment: Accepted for publication in Frontiers in Neuroscience, Section Neuromorphic Engineering.
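
    The FAM intuition can be sketched as a greedy assignment: score each memory word by how much its faulty bit positions could corrupt a weight, then place the most important weights into the least-corrupted words. This is a simplification of the paper's mapping, with an assumed 8-bit word layout and a hypothetical importance score.

    def corruption_cost(faulty_bit_positions):
        # Worst-case magnitude error of a word whose listed bit positions
        # are faulty (bit 0 = LSB): faults on MSBs cost exponentially more.
        return sum(1 << b for b in faulty_bit_positions)

    def fault_aware_mapping(weight_importance, fault_map):
        # fault_map[i] = faulty bit positions of memory word i.
        words = sorted(range(len(fault_map)),
                       key=lambda i: corruption_cost(fault_map[i]))
        weights = sorted(range(len(weight_importance)),
                         key=lambda i: -weight_importance[i])
        return {w: word for w, word in zip(weights, words)}

    fault_map = [[], [7], [0], [1, 0]]       # word 1 has a faulty MSB
    importance = [0.9, 0.1, 0.5, 0.7]
    print(fault_aware_mapping(importance, fault_map))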